
    A Tractable State-Space Model for Symmetric Positive-Definite Matrices

    Bayesian analysis of state-space models includes computing the posterior distribution of the system's parameters as well as filtering, smoothing, and predicting the system's latent states. When the latent states wander around $\mathbb{R}^n$, there are several well-known modeling components and computational tools that may be profitably combined to achieve these tasks. However, there are scenarios, like tracking an object in a video or tracking a covariance matrix of financial asset returns, when the latent states are restricted to a curve within $\mathbb{R}^n$ and these models and tools do not immediately apply. Within this constrained setting, most work has focused on filtering and less attention has been paid to the other aspects of Bayesian state-space inference, which tend to be more challenging. To that end, we present a state-space model whose latent states take values on the manifold of symmetric positive-definite matrices and for which one may easily compute the posterior distribution of the latent states and the system's parameters, in addition to filtered distributions and one-step-ahead predictions. Deploying the model within the context of finance, we show how one can use realized covariance matrices as data to predict latent time-varying covariance matrices. This approach outperforms factor stochastic volatility. Comment: 22 pages: 16 pages main manuscript, 4 pages appendix, 2 pages references
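The abstract names the task (one-step-ahead prediction of a latent covariance matrix from realized covariance data) without spelling out its model. As a rough illustration only — a hypothetical discount filter, not the paper's state-space model — one can produce predictions that stay symmetric positive-definite by accumulating discounted realized covariances:

```python
import numpy as np

def discount_filter(realized_covs, delta=0.9):
    """One-step-ahead covariance predictions from realized covariance data.

    A minimal sketch (not the paper's model): the filtered state is a
    discounted sum of past realized covariances, which remains symmetric
    positive-definite whenever the observations are.
    """
    p = realized_covs[0].shape[0]
    S = np.eye(p)          # prior scale matrix
    n = 1.0                # prior "effective sample size"
    predictions = []
    for RC in realized_covs:
        predictions.append(S / n)   # one-step-ahead prediction
        S = delta * S + RC          # discount old information, add new
        n = delta * n + 1.0
    return predictions
```

Each prediction is a convex-like combination of SPD matrices scaled by a positive constant, so the manifold constraint the abstract emphasizes is respected by construction.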

    Flexible covariance estimation in graphical Gaussian models

    In this paper, we propose a class of Bayes estimators for the covariance matrix of graphical Gaussian models Markov with respect to a decomposable graph $G$. Working with the $W_{P_G}$ family defined by Letac and Massam [Ann. Statist. 35 (2007) 1278--1323], we derive closed-form expressions for Bayes estimators under the entropy and squared-error losses. The $W_{P_G}$ family includes the classical inverse of the hyper inverse Wishart but has many more shape parameters, thus allowing for flexibility in differentially shrinking various parts of the covariance matrix. Moreover, using this family avoids recourse to MCMC, which is often infeasible in high-dimensional problems. We illustrate the performance of our estimators through a collection of numerical examples where we explore frequentist risk properties and the efficacy of graphs in the estimation of high-dimensional covariance structures. Comment: Published at http://dx.doi.org/10.1214/08-AOS619 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
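Under squared-error loss, the Bayes estimator is the posterior mean, which is what makes closed-form estimation possible for conjugate families. A toy stand-in for the $W_{P_G}$ family — the ordinary inverse-Wishart prior, which $W_{P_G}$ generalizes with graph-structured shape parameters — gives the flavor of such a closed-form estimator:

```python
import numpy as np

def iw_posterior_mean(X, Psi, nu):
    """Closed-form Bayes estimator of a covariance matrix under
    squared-error loss (the posterior mean) for an ordinary
    inverse-Wishart prior IW(Psi, nu). Zero-mean data assumed.

    A toy sketch, not the W_{P_G}-family estimators of the paper.
    """
    n, p = X.shape
    S = X.T @ X                        # scatter matrix
    # Posterior is IW(Psi + S, nu + n); its mean exists when nu + n > p + 1.
    return (Psi + S) / (nu + n - p - 1)
```

With many observations the data term dominates and the estimate approaches the sample covariance; with few, the prior scale `Psi` shrinks it — the same shrinkage mechanism the abstract describes, but with a single shape parameter rather than one per part of the graph.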

    Particle Learning and Smoothing

    Particle learning (PL) provides state filtering, sequential parameter learning, and smoothing in a general class of state-space models. Our approach extends existing particle methods by incorporating the estimation of static parameters via a fully adapted filter that utilizes conditional sufficient statistics for parameters and/or states as particles. State smoothing in the presence of parameter uncertainty is also solved as a by-product of PL. In a number of examples, we show that PL outperforms existing particle filtering alternatives and proves to be a competitor to MCMC. Comment: Published at http://dx.doi.org/10.1214/10-STS325 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
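The key idea in the abstract is that each particle carries conditional sufficient statistics for the static parameters, so parameter draws can be refreshed at every step instead of degenerating. A simplified propagate-then-resample sketch (PL proper resamples first using the full predictive) for a local-level model with unknown observation variance:

```python
import numpy as np

def pl_sketch(y, tau2=0.1, N=2000, seed=0):
    """Sequential state filtering with static-parameter learning via
    conditional sufficient statistics. Model (an illustrative choice):
        x_t = x_{t-1} + N(0, tau2),   y_t = x_t + N(0, sigma2),
    with sigma2 unknown under an inverse-gamma(a0, b0) prior.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(N)            # state particles
    a = np.full(N, 2.0)        # inverse-gamma shape sufficient statistic
    b = np.full(N, 2.0)        # inverse-gamma scale sufficient statistic
    means = []
    for yt in y:
        sigma2 = b / rng.gamma(a)                 # draw sigma2 | suff. stats
        x = x + rng.normal(0.0, np.sqrt(tau2), N)  # propagate state
        logw = -0.5 * np.log(sigma2) - 0.5 * (yt - x) ** 2 / sigma2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(N, N, p=w)               # resample
        x, a, b = x[idx], a[idx], b[idx]
        a = a + 0.5                               # update sufficient stats
        b = b + 0.5 * (yt - x) ** 2               # with the new residual
        means.append(x.mean())
    return np.array(means), (a, b)
```

Because each particle stores `(a, b)` rather than a fixed `sigma2` value, the parameter posterior keeps its diversity across time — the mechanism the abstract credits for outperforming plug-in particle filters.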

    Particle Learning for General Mixtures

    This paper develops particle learning (PL) methods for the estimation of general mixture models. The approach is distinguished from alternative particle filtering methods in two major ways. First, each iteration begins by resampling particles according to the posterior predictive probability, leading to a more efficient set for propagation. Second, each particle tracks only the "essential state vector," leading to reduced-dimensional inference. In addition, we describe how the approach applies to more general mixture models of current interest in the literature; it is hoped that this will inspire a greater number of researchers to adopt sequential Monte Carlo methods for fitting their sophisticated mixture-based models. Finally, we show that PL leads to straightforward tools for marginal likelihood calculation and posterior cluster allocation.
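The two distinguishing features — resampling by posterior predictive before propagation, and tracking only an essential state vector — can be sketched on a toy mixture. Here the component means and variances are fixed hypothetical values and only the mixture weights are learned, so the essential state vector per particle is just a pair of cluster counts; this is an illustration of the mechanics, not the paper's general algorithm:

```python
import numpy as np

def pl_mixture(y, mus=(-2.0, 2.0), alpha=1.0, N=1000, seed=0):
    """PL-style sequential inference for the weights of a two-component
    Gaussian mixture with known unit variances and known means `mus`
    (hypothetical values). Each particle's essential state vector is its
    pair of cluster counts; particles are resampled by posterior
    predictive first, then the new allocation is propagated.
    Returns the posterior-mean cluster fractions.
    """
    rng = np.random.default_rng(seed)
    mus = np.asarray(mus)
    counts = np.zeros((N, 2))                    # essential state vectors
    for t, yt in enumerate(y):
        lik = np.exp(-0.5 * (yt - mus) ** 2) / np.sqrt(2 * np.pi)  # (2,)
        prior = (alpha + counts) / (2 * alpha + t)                 # (N, 2)
        pred = prior @ lik                       # predictive p(y_t | counts)
        counts = counts[rng.choice(N, N, p=pred / pred.sum())]  # resample 1st
        prior = (alpha + counts) / (2 * alpha + t)              # refresh
        post = prior * lik
        post /= post.sum(axis=1, keepdims=True)
        z = (rng.random(N) < post[:, 1]).astype(int)   # propagate allocation
        counts[np.arange(N), z] += 1
    return counts.mean(axis=0) / len(y)
```

Resampling before propagation means particles that explain the incoming observation well are duplicated first, so the subsequent allocation draws start from a good set — the efficiency gain the abstract highlights.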